# GGUF lightweight
## MiniCPM4-8B-Q8_0-GGUF
License: Apache-2.0

MiniCPM4-8B-Q8_0-GGUF is a GGUF-format conversion of openbmb/MiniCPM4-8B, produced with llama.cpp and suitable for local inference.

Tags: Large Language Model, Transformers, Multilingual

Author: AyyYOO (160 downloads, 2 likes)
## AuraFlow-v0.3-GGUF
License: Apache-2.0

A direct GGUF-format conversion of fal/AuraFlow-v0.3, used primarily for text-to-image generation.

Tags: Text-to-Image
Author: city96 (1,085 downloads, 4 likes)